In the annual 'Top Ten National Treasures' selection organized by the State-owned Assets Supervision and Administration Commission of the State Council, China Telecom's self-developed Xingchen Model made the list on the strength of its groundbreaking technological achievements. As China's first full-size, full-modality, fully domestically developed foundation model system, the Xingchen Model demonstrates exceptional capabilities across semantic, speech, vision, and multimodal tasks. In the semantic domain in particular, it has achieved significant breakthroughs: trained on a fully domestic 10,000-GPU cluster with a domestically developed training framework, the model reaches over 93% of the computational efficiency of comparable NVIDIA hardware, while also achieving a favorable training-duration ratio.